We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment ...
In this paper, we show that contrastive learning methods can learn how the environment changes and improve the robustness of a VT&R framework. We apply a fully ...
This repository contains code for a low compute teach and repeat navigation approach which only requires monocular vision and wheel odometry.
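The snippet above describes a teach-and-repeat pipeline driven only by monocular images and wheel odometry. As a minimal, hypothetical sketch of that idea (all function names and the toy column-intensity descriptor are invented for illustration, not the repository's actual API): the teach phase stores odometry-indexed image descriptors along the path, and the repeat phase looks up the nearest taught descriptor and converts the estimated horizontal image shift into a steering correction.

```python
# Toy teach-and-repeat sketch (hypothetical; not the repository's API).
# Teach: record (odometry_distance, image_descriptor) pairs along the path.
# Repeat: at the current odometry distance, fetch the nearest taught
# descriptor, estimate a horizontal shift, and steer against it.

def teach(path):
    """path: iterable of (distance_m, descriptor) tuples recorded while driving."""
    return sorted(path, key=lambda p: p[0])

def nearest(mapped, dist):
    """Taught descriptor recorded closest to the current odometry distance."""
    return min(mapped, key=lambda p: abs(p[0] - dist))[1]

def horizontal_shift(taught, current, max_shift=3):
    """Brute-force 1D correlation over a toy descriptor (list of column
    intensities); returns the shift with the highest correlation score."""
    best, best_score = 0, float("-inf")
    n = len(taught)
    for s in range(-max_shift, max_shift + 1):
        score = sum(taught[i] * current[i + s]
                    for i in range(max(0, -s), min(n, n - s)))
        if score > best_score:
            best, best_score = s, score
    return best

def repeat_step(mapped, odom_dist, current_desc, gain=0.1):
    """One repeat-phase control step: heading correction from image shift."""
    shift = horizontal_shift(nearest(mapped, odom_dist), current_desc)
    return -gain * shift  # steering command; sign convention is arbitrary here
```

For example, teaching with the descriptor `[0, 1, 2, 3, 2, 1, 0]` and repeating with the same scene shifted one column to the right yields a shift estimate of `1` and hence a small corrective steering command; a real system would replace the toy correlation with proper image features, but the control structure is the same.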
In this work, we present a highly robust appearance-based visual teach & repeat navigation system for UAVs, based on a novel Semantic and Spatial Matching  ...
Another group of algorithms uses image features for mapping and navigation but relies on the planarity of the camera's motion to reduce the complexity of the ...
Apr 13, 2022 · In this work, we aim to use these learned representations for visual teach and repeat (VT&R) navigation, where a robot has to move autonomously ...
This project addresses vision-based navigation of mobile robots in outdoor environments. In particular, we address the robustness of image features to seasonal ...
Duckett, "Image features for visual teach-and-repeat navigation in changing environments," Robotics and Autonomous Systems, vol. 88, pp. 127–141, 2017. [10] ...
Kusumam, Image features for visual teach-and-repeat navigation in changing environments, Robot. Auton. Syst., vol. 88, pp. 127; Clement, Robust Monocular Visual ...
Aug 29, 2018 · In part 2, we will be looking into what makes Visual Teach and Repeat different, potential applications and compatibility with other robots.